H2O Scaling for Telluric Correction

Notebook for developing ideas to go into the TellRemoval code.

Need to apply a scaling of T^x to the water transmission at full resolving power, then apply a kernel to bring it to the resolution of CRIRES.

Fit to the observed data (probably with the other lines removed) to find the best x to apply for the correction (gives the flattest result or zero linewidth).
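The scaling idea can be sketched with a toy absorption line (the Gaussian profile and all numbers here are illustrative, not CRIRES or TAPAS data): raising a transmission spectrum T to a power x deepens the lines for x > 1 and weakens them for x < 1, while leaving the continuum at 1 untouched.

```python
import numpy as np

# Toy telluric transmission: continuum at 1 with one Gaussian absorption line.
wav = np.linspace(2110.0, 2112.0, 200)
T = 1.0 - 0.4 * np.exp(-0.5 * ((wav - 2111.0) / 0.05) ** 2)

# Power-law scaling: x > 1 deepens the line, x < 1 weakens it.
T_deeper = T ** 1.5
T_weaker = T ** 0.5
```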

In [1]:
### Load modules and Bokeh
# Imports from __future__ in case we're running Python 2
from __future__ import division, print_function
from __future__ import absolute_import, unicode_literals

import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits

# Seaborn, useful for graphics
import seaborn as sns

# Magic function to make matplotlib inline; other style specs must come AFTER
%matplotlib inline

# Import Bokeh modules for interactive plotting
import bokeh.io
import bokeh.mpl
import bokeh.plotting

# This enables SVG graphics inline. There is a bug, so comment this out if it fails.
%config InlineBackend.figure_formats = {'svg',}

# This enables high resolution PNGs. SVG is preferred, but has problems
# rendering vertical and horizontal lines
#%config InlineBackend.figure_formats = {'png', 'retina'}

# JB's favorite Seaborn settings for notebooks
rc = {'lines.linewidth': 1, 
      'axes.labelsize': 14, 
      'axes.titlesize': 16, 
      'axes.facecolor': 'DFDFE5'}
sns.set_context('notebook', rc=rc)
sns.set_style('darkgrid', rc=rc)

# Set up Bokeh for inline viewing
bokeh.io.output_notebook()
Loading BokehJS ...

Load in Observed Data

In [2]:
# Need to update these to vacuum wavelengths with no BERV correction
chip1 = "CRIRE.2012-04-07T00-08-29.976_1.nod.ms.norm.sum.wavecal.fits"
chip2 = "CRIRE.2012-04-07T00-08-29.976_2.nod.ms.norm.sum.wavecal.fits"
chip3 = "CRIRE.2012-04-07T00-08-29.976_3.nod.ms.norm.sum.wavecal.fits"
chip4 = "CRIRE.2012-04-07T00-08-29.976_4.nod.ms.norm.sum.wavecal.fits" 

Obs1 = fits.getdata(chip1) 
hdr1 = fits.getheader(chip1) 
Obs2 = fits.getdata(chip2) 
hdr2 = fits.getheader(chip2) 
Obs3 = fits.getdata(chip3) 
hdr3 = fits.getheader(chip3) 
Obs4 = fits.getdata(chip4) 
hdr4 = fits.getheader(chip4) 

print("Column names = {}".format(Obs1.columns.names))
wl1 = Obs1["Wavelength"]
I1_uncorr = Obs1["Extracted_DRACS"]

wl2 = Obs2["Wavelength"]
I2_uncorr = Obs2["Extracted_DRACS"]

wl3 = Obs3["Wavelength"]
I3_uncorr = Obs3["Extracted_DRACS"]

wl4 = Obs4["Wavelength"]
I4_uncorr = Obs4["Extracted_DRACS"]

start_airmass = hdr1["HIERARCH ESO TEL AIRM START"]
end_airmass = hdr1["HIERARCH ESO TEL AIRM END"]
obs_airmass = (start_airmass + end_airmass) / 2
print("Data from Detectors is now loaded")
Column names = ['Wavelength', 'Extracted_DRACS', 'Pixel']
Data from Detectors is now loaded
In [3]:
# Rough BERV correction until the input file is calibrated with non-BERV-corrected TAPAS
wl1 = wl1 - 0.5   # rough BERV correction
wl2 = wl2 - 0.54  # rough BERV correction
wl3 = wl3 - 0.55  # rough BERV correction
wl4 = wl4 - 0.7   # rough BERV correction

Load in the TAPAS data

In [4]:
import Obtain_Telluric as obt
tapas_all = "tapas_2012-04-07T00-24-03_ReqId_10_R-50000_sratio-10_barydone-NO.ipac"
tapas_h20 = "tapas_2012-04-07T00-24-03_ReqId_12_No_Ifunction_barydone-NO.ipac"
tapas_not_h20 = "tapas_2012-04-07T00-24-03_ReqId_18_R-50000_sratio-10_barydone-NO.ipac"

tapas_all_data, tapas_all_hdr = obt.load_telluric("", tapas_all)
tapas_all_airmass = float(tapas_all_hdr["airmass"])

print("Telluric Airmass ", tapas_all_airmass)
tapas_all_respower = int(float((tapas_all_hdr["respower"])))
print("Telluric Resolution Power =", tapas_all_respower)

tapas_h20_data, tapas_h20_hdr = obt.load_telluric("", tapas_h20)
tapas_h20_airmass = float(tapas_h20_hdr["airmass"])

print("Telluric Airmass ", tapas_h20_airmass)
try:
    tapas_h20_respower = int(float(tapas_h20_hdr["respower"]))
except KeyError:
    tapas_h20_respower = "Nan"
print("Telluric Resolution Power =", tapas_h20_respower)

tapas_not_h20_data, tapas_not_h20_hdr = obt.load_telluric("", tapas_not_h20)
tapas_not_h20_airmass = float(tapas_not_h20_hdr["airmass"])

print("Telluric Airmass ", tapas_not_h20_airmass)
tapas_not_h20_respower = int(float((tapas_not_h20_hdr["respower"])))
print("Telluric Resolution Power =", tapas_not_h20_respower)
    
#print(tapas_all_hdr)
Telluric Airmass  1.628051
Telluric Resolution Power = 50000
Telluric Airmass  1.628051
Telluric Resolution Power = Nan
Telluric Airmass  1.628051
Telluric Resolution Power = 50000

Plot the data

Including the 3 tapas models to show they align well and are consistent.

In [5]:
plt.plot(wl1, I1_uncorr, 'b') #including rough berv correction
plt.plot(wl2, I2_uncorr, 'b') #including rough berv correction
plt.plot(wl3, I3_uncorr, 'b') #including rough berv correction
plt.plot(wl4, I4_uncorr, 'b') #including rough berv correction
plt.plot(tapas_h20_data[0], tapas_h20_data[1], "--k", label="H20")
plt.plot(tapas_all_data[0], tapas_all_data[1], "-r", label="all")
plt.plot(tapas_not_h20_data[0], tapas_not_h20_data[1], "-.g", label="Not H20")

#plt.legend()

# Make it interactive with Bokeh
bokeh.plotting.show(bokeh.mpl.to_bokeh())
Out[5]:

<Bokeh Notebook handle for In[5]>

Remove non-H2O lines

(Using the telluric removal modules) and plot the result.

In [6]:
from TellRemoval import divide_spectra, airmass_scaling, telluric_correct, match_wl
In [7]:
def correction(wl_obs, spec_obs, wl_tell, spec_tell, obs_airmass, tell_airmass, kind="linear", method="scipy"):
    interped_tell = match_wl(wl_tell, spec_tell, wl_obs)
    scaled_tell = airmass_scaling(interped_tell, tell_airmass, obs_airmass)
    corr_spec = divide_spectra(spec_obs, scaled_tell)                        # Divide by scaled telluric spectra
    return corr_spec
    
#def telluric_correct(wl_obs, spec_obs, wl_tell, spec_tell, obs_airmass, tell_airmass, kind="linear", method="scipy"):
#    return Corrections, Correction_tells, Correction_Bs, Correction_labels

# Corrections, Correction_tells, Correction_Bs, Correction_labels = telluric_correct(wl2, I2_uncorr, tapas_not_h20[0], tapas_not_h20[1], obs_airmass, tapas_not_h20_airmass) 
# Getting a zero division error from this function so will try it again from the functions here
tell_airmass = tapas_not_h20_airmass

I1_not_h20_corr = correction(wl1, I1_uncorr, tapas_not_h20_data[0], tapas_not_h20_data[1], obs_airmass, tapas_not_h20_airmass, kind="linear", method="scipy")
I2_not_h20_corr = correction(wl2, I2_uncorr, tapas_not_h20_data[0], tapas_not_h20_data[1], obs_airmass, tapas_not_h20_airmass, kind="linear", method="scipy")
I3_not_h20_corr = correction(wl3, I3_uncorr, tapas_not_h20_data[0], tapas_not_h20_data[1], obs_airmass, tapas_not_h20_airmass, kind="linear", method="scipy")
I4_not_h20_corr = correction(wl4, I4_uncorr, tapas_not_h20_data[0], tapas_not_h20_data[1], obs_airmass, tapas_not_h20_airmass, kind="linear", method="scipy")
        
plt.plot(wl1, I1_uncorr, "b")
plt.plot(wl2, I2_uncorr, "b")
plt.plot(wl3, I3_uncorr, "b")
plt.plot(wl4, I4_uncorr, "b")
plt.plot(wl1, I1_not_h20_corr, "r")
plt.plot(wl2, I2_not_h20_corr, "r")
plt.plot(wl3, I3_not_h20_corr, "r")
plt.plot(wl4, I4_not_h20_corr, "r")
plt.plot(tapas_not_h20_data[0], tapas_not_h20_data[1], "k")
# Make it interactive with Bokeh
bokeh.plotting.show(bokeh.mpl.to_bokeh())
linear scipy interpolation
Interpolation Time = 0.000850915908813 seconds
linear scipy interpolation
Interpolation Time = 0.000297069549561 seconds
linear scipy interpolation
Interpolation Time = 0.000273942947388 seconds
linear scipy interpolation
Interpolation Time = 0.000265121459961 seconds
Out[7]:

<Bokeh Notebook handle for In[7]>
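The airmass scaling inside `correction` comes from `TellRemoval.airmass_scaling`, which is not reproduced in this notebook; a minimal stand-in, assuming the usual Beer-Lambert form T_obs = T_tell^(X_obs / X_tell), would be:

```python
import numpy as np

def airmass_scale(transmission, tell_airmass, obs_airmass):
    """Rescale telluric transmission from the model airmass to the observed
    airmass, assuming Beer-Lambert absorption (optical depth scales with X)."""
    return transmission ** (obs_airmass / tell_airmass)

# A lower observed airmass gives shallower lines (values move toward 1).
tell = np.array([1.0, 0.9, 0.5])
scaled = airmass_scale(tell, tell_airmass=1.628, obs_airmass=1.5)
```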

Convolve instrument profile function:

To use inside the fit.

In [9]:
## Useful functions from Pedro's code:

def wav_selector(wav, flux, wav_min, wav_max):
    """Return the wavelength and flux values within a given range."""
    wav_sel = np.array([value for value in wav if(wav_min < value < wav_max)], dtype="float64")
    flux_sel = np.array([value[1] for value in zip(wav, flux) if(wav_min < value[0] < wav_max)], dtype="float64")

    return [wav_sel, flux_sel]

def chip_selector(wav, flux, chip):
    chip = str(chip)
    if(chip in ["ALL", "all", "","0"]):
        chipmin = float(hdr1["HIERARCH ESO INS WLEN STRT1"])  # Wavelength start on detector [nm]
        chipmax = float(hdr1["HIERARCH ESO INS WLEN END4"])   # Wavelength end on detector [nm]
        #return [wav, flux]
    elif(chip == "1"):
        chipmin = float(hdr1["HIERARCH ESO INS WLEN STRT1"])  # Wavelength start on detector [nm]
        chipmax = float(hdr1["HIERARCH ESO INS WLEN END1"])   # Wavelength end on detector [nm]
    elif(chip == "2"):
        chipmin = float(hdr1["HIERARCH ESO INS WLEN STRT2"])  # Wavelength start on detector [nm]
        chipmax = float(hdr1["HIERARCH ESO INS WLEN END2"])   # Wavelength end on detector [nm]
    elif(chip == "3"):   
        chipmin = float(hdr1["HIERARCH ESO INS WLEN STRT3"])  # Wavelength start on detector [nm]
        chipmax = float(hdr1["HIERARCH ESO INS WLEN END3"])   # Wavelength end on detector [nm]
    elif(chip == "4"):   
        chipmin = float(hdr1["HIERARCH ESO INS WLEN STRT4"])  # Wavelength start on detector [nm]
        chipmax = float(hdr1["HIERARCH ESO INS WLEN END4"])   # Wavelength end on detector [nm]
    else:
        print("Unrecognized chip tag.")
        exit()
    
    #select values form the chip  
    wav_chip, flux_chip = wav_selector(wav, flux, chipmin, chipmax)
    
    return [wav_chip, flux_chip]

def unitary_Gauss(x, center, FWHM):
    """Gaussian with unit area.

    x      : points at which to evaluate
    center : mean of the Gaussian
    FWHM   : full width at half maximum
    """
    sigma = np.abs(FWHM) / (2 * np.sqrt(2 * np.log(2)))
    Amp = 1.0 / (sigma * np.sqrt(2 * np.pi))
    tau = -((x - center)**2) / (2 * (sigma**2))
    result = Amp * np.exp(tau)

    return result
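As a sanity check, `unitary_Gauss` should integrate to approximately 1 for any FWHM when well sampled (the function is restated here so the check is self-contained):

```python
import numpy as np

def unitary_gauss(x, center, fwhm):
    # Same formula as unitary_Gauss above.
    sigma = np.abs(fwhm) / (2 * np.sqrt(2 * np.log(2)))
    amp = 1.0 / (sigma * np.sqrt(2 * np.pi))
    return amp * np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

x = np.linspace(-1.0, 1.0, 2001)
area = np.trapz(unitary_gauss(x, 0.0, 0.1), x)  # ~1.0 for a well-sampled kernel
```

Note that the convolution loops below use `np.sum(IP * flux)` without a wavelength-step factor, which is why `convolve()` later divides by the kernel's response to a unitary spectrum.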

pyasl.instrBroadGaussFast(wvl, flux, resolution, edgeHandling=None, fullout=False, maxsig=None)

Apply Gaussian instrumental broadening.

This function broadens a spectrum assuming a Gaussian kernel. The width of the kernel is determined by the resolution. In particular, the function will determine the mean wavelength and set the Full Width at Half Maximum (FWHM) of the Gaussian to (mean wavelength)/resolution.
Parameters :    
wvl : array
    The wavelength

flux : array
    The spectrum

resolution : int
    The spectral resolution.

edgeHandling : string, {None, "firstlast"}, optional
    Determines the way edges will be handled. If None, nothing will be done about it. If set to "firstlast", the spectrum will be extended by using the first and last value at the start or end. Note that this is not necessarily appropriate. The default is None.

fullout : boolean, optional
    If True, also the FWHM of the Gaussian will be returned.

maxsig : float, optional
    The extent of the broadening kernel in terms of standard deviations. By default, the Gaussian broadening kernel will be extended over the entire given spectrum, which can cause slow evaluation in the case of large spectra. A reasonable choice could, e.g., be five.

Returns :    
Broadened spectrum : array
    The input spectrum convolved with a Gaussian kernel.

FWHM : float, optional
    The Full Width at Half Maximum (FWHM) of the used Gaussian kernel.
In [10]:
# Convolution from Pyastronomy
from PyAstronomy import pyasl
#tapas_h20_data[0], tapas_h20_data[1], "0", 50000, FWHM_lim=5.0, plot=True
#ans = pyasl.instrBroadGaussFast(wvl, flux, resolution, edgeHandling=None, fullout=False, maxsig=None)
In [11]:
# Restrict to the CRIRES wavelength range
    
R = 50000  

chipmin = float(hdr1["HIERARCH ESO INS WLEN STRT1"])  # Wavelength start on detector [nm]
chipmax = float(hdr1["HIERARCH ESO INS WLEN END4"])   # Wavelength end on detector [nm]
wav_chip, flux_chip = wav_selector(tapas_h20_data[0], tapas_h20_data[1], chipmin, chipmax)

FWHM_min = wav_chip[0]/R    #FWHM at the extremes of vector
FWHM_max = wav_chip[-1]/R   

print("Min FWHM", FWHM_min)
print("Max FWHM", FWHM_max)

# pyasl needs equidistant wavelengths


from Find_gcdt import gcdt
greatest_common_divisor = gcdt(wav_chip, 4)
print("Found divisor = ", greatest_common_divisor)
steps = (wav_chip[-1] - wav_chip[0])/greatest_common_divisor
print("Number of steps =", steps)



new_wav = np.linspace(wav_chip[0], wav_chip[-1], num=int(steps), endpoint=True)


# Interpolate to the new wavelength grid
new_flux = match_wl(wav_chip, flux_chip, new_wav)
print("length of new flux", len(new_flux))


diffs = np.diff(wav_chip)
print("First Diff={}, last diff={}".format(diffs[0], diffs[-1]))

data_step = 2 * (wav_chip[-1] - wav_chip[0]) / np.min(diffs)
new_diff_wav = np.linspace(wav_chip[0], wav_chip[-1], num=int(data_step), endpoint=True)

new_diff_flux = match_wl(wav_chip, flux_chip, new_diff_wav)
print("length of new flux", len(new_diff_flux))

#new_flux = np.interp(new_wav, wav_chip, flux_chip)
Min FWHM 0.0422356803304
Max FWHM 0.0433047189664
('Counter = ', 0)
Found divisor =  0.0001
Number of steps = 534519.318021
linear scipy interpolation
Interpolation Time = 0.00531482696533 seconds
length of new flux 534519
First Diff=0.000503485590343, last diff=0.000529295589331
linear scipy interpolation
Interpolation Time = 0.0038149356842 seconds
length of new flux 212327
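The resampling step above can be illustrated in isolation; `np.interp` stands in for `match_wl` here, and the irregular input grid is synthetic:

```python
import numpy as np

np.random.seed(0)
# Synthetic non-uniform wavelength grid, like the TAPAS sampling.
wav = 2111.0 + np.cumsum(np.random.uniform(0.0004, 0.0006, size=1000))
flux = np.ones_like(wav)

# Resample onto a uniform grid, since pyasl.instrBroadGaussFast assumes
# equidistant wavelength samples.
new_wav = np.linspace(wav[0], wav[-1], num=len(wav))
new_flux = np.interp(new_wav, wav, flux)
steps = np.diff(new_wav)  # now constant
```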
In [12]:
import time
import datetime
start = time.time()
print("start time", datetime.datetime.now().time())

Conv_flux_pysal = pyasl.instrBroadGaussFast(new_wav, new_flux, 50000, edgeHandling=None, fullout=False, maxsig=None)

done = time.time()
print("end time", datetime.datetime.now().time())
elapsed = done - start
print("Convolution time = ", elapsed)
start time 12:19:34.682956
end time 12:21:17.991537
Convolution time =  103.308576822
In [13]:
plt.plot(tapas_h20_data[0], tapas_h20_data[1],"b")
plt.plot(new_wav,Conv_flux_pysal, "r")
plt.xlabel("Wavelength (nm)")
plt.ylabel("Flux")
plt.title("Test of Pyasl convolution \n (Used interpolation to equidistant points)")
#plt.show()
# Make it interactive with Bokeh
bokeh.plotting.show(bokeh.mpl.to_bokeh())
Out[13]:

<Bokeh Notebook handle for In[13]>

In [15]:
def convolution_nir(wav, flux, chip, R, FWHM_lim=5.0, plot=True):
    """Convolution code adapted from Pedro's code."""
    
    wav_chip, flux_chip = chip_selector(wav, flux, chip)
    #we need to calculate the FWHM at this value in order to set the starting point for the convolution
    
    print(wav_chip)
    print(flux_chip)
    FWHM_min = wav_chip[0]/R    #FWHM at the extremes of vector
    FWHM_max = wav_chip[-1]/R       
    
    
    #wide wavelength bin for the resolution_convolution
    wav_extended, flux_extended = wav_selector(wav, flux, wav_chip[0]-FWHM_lim*FWHM_min, wav_chip[-1]+FWHM_lim*FWHM_max) 
    wav_extended = np.array(wav_extended, dtype="float64")
    flux_extended = np.array(flux_extended, dtype="float64")
    
    print("Starting the Resolution convolution...")
    
    flux_conv_res = []
    counter = 0    
    for wav in wav_chip:
        # select all values such that they are within the FWHM limits
        FWHM = wav/R
        #print("FWHM of {0} calculated for wavelength {1}".format(FWHM, wav))
        indexes = [ i for i in range(len(wav_extended)) if ((wav - FWHM_lim*FWHM) < wav_extended[i] < (wav + FWHM_lim*FWHM))]
        flux_2convolve = flux_extended[indexes[0]:indexes[-1]+1]
        IP = unitary_Gauss(wav_extended[indexes[0]:indexes[-1]+1], wav, FWHM)
        flux_conv_res.append(np.sum(IP*flux_2convolve))
        if(len(flux_conv_res) % (len(wav_chip)//100) == 0):
            counter = counter + 1
            print("Resolution Convolution at {}%...".format(counter))
    flux_conv_res = np.array(flux_conv_res, dtype="float64")
    print("Done.\n")
    
    if(plot):
        fig = plt.figure(1)
        plt.xlabel(r"wavelength [nm]")
        plt.ylabel(r"flux [counts]")
        plt.plot(wav_chip, flux_chip/np.max(flux_chip), color='k', linestyle="-", label="Original spectrum")
        plt.plot(wav_chip, flux_conv_res/np.max(flux_conv_res), color='b', linestyle="-", label="Spectrum observed at R=%d" % (R))
        plt.legend(loc='best')
        plt.show()
    return wav_chip, flux_conv_res

print("Done")
Done

Test convolution - without parallelisation

In [16]:
import time
import datetime
start = time.time()
print("start time", start)
print("start time", datetime.datetime.now().time())

x, y = convolution_nir(tapas_h20_data[0], tapas_h20_data[1], "0", 50000, FWHM_lim=5.0, plot=True)
  
done = time.time()
print("end time", datetime.datetime.now().time())
elapsed = done - start
print("Convolution time = ", elapsed)
start time 1463743369.25
start time 12:22:49.251054
[ 2111.78401652  2111.78452     2111.78502349 ...,  2165.23488973
  2165.23541902  2165.23594832]
[ 0.99408276  0.99406469  0.99404643 ...,  0.99879285  0.99879221
  0.99879156]
Starting the Resolution convolution...
Resolution Convolution at 1%...
Resolution Convolution at 2%...
---------------------------------------------------------------------------
KeyboardInterrupt                         Traceback (most recent call last)
<ipython-input-16-c5bbc2860aca> in <module>()
      5 print("start time", datetime.datetime.now().time())
      6 
----> 7 x, y = convolution_nir(tapas_h20_data[0], tapas_h20_data[1], "0", 50000, FWHM_lim=5.0, plot=True)
      8 
      9 done = time.time()

<ipython-input-15-d1baa0ecb3bd> in convolution_nir(wav, flux, chip, R, FWHM_lim, plot)
     25         FWHM = wav/R
     26         #print("FWHM of {0} calculated for wavelength {1}".format(FWHM, wav))
---> 27         indexes = [ i for i in range(len(wav_extended)) if ((wav - FWHM_lim*FWHM) < wav_extended[i] < (wav + FWHM_lim*FWHM))]
     28         flux_2convolve = flux_extended[indexes[0]:indexes[-1]+1]
     29         IP = unitary_Gauss(wav_extended[indexes[0]:indexes[-1]+1], wav, FWHM)

KeyboardInterrupt: 

Time on work comp - 188.6781919 s

In [ ]:
 
In [ ]:
plt.plot(tapas_h20_data[0], tapas_h20_data[1],"b")
plt.plot(x,y/np.max(y), "r")
plt.xlabel("Wavelength (nm)")
plt.ylabel("Flux")
plt.title("Test of convolution")
#plt.show()
# Make it interactive with Bokeh
bokeh.plotting.show(bokeh.mpl.to_bokeh())
In [ ]:
 
In [43]:
from math import sqrt
from joblib import Parallel, delayed

def convolve(wav, R, wav_extended, flux_extended, FWHM_lim):
        # select all values such that they are within the FWHM limits
        FWHM = wav/R
        indexes = [ i for i in range(len(wav_extended)) if ((wav - FWHM_lim*FWHM) < wav_extended[i] < (wav + FWHM_lim*FWHM))]
        flux_2convolve = flux_extended[indexes[0]:indexes[-1]+1]
        IP = unitary_Gauss(wav_extended[indexes[0]:indexes[-1]+1], wav, FWHM)
        val = np.sum(IP*flux_2convolve)
        unitary_val = np.sum(IP*np.ones_like(flux_2convolve))  # effect of the convolution on a unitary spectrum, to normalize for the varying number of points
        return val/unitary_val
    
def parallel_convolution(wav, flux, chip, R, FWHM_lim=5.0, n_jobs=-1, parallel_workers=None):
    """Convolution code adapted from Pedro's code."""
    
    wav_chip, flux_chip = chip_selector(wav, flux, chip)
    #we need to calculate the FWHM at this value in order to set the starting point for the convolution
    
    #print(wav_chip)
    #print(flux_chip)
    FWHM_min = wav_chip[0]/R    #FWHM at the extremes of vector
    FWHM_max = wav_chip[-1]/R       
    
    #wide wavelength bin for the resolution_convolution
    wav_extended, flux_extended = wav_selector(wav, flux, wav_chip[0]-FWHM_lim*FWHM_min, wav_chip[-1]+FWHM_lim*FWHM_max) 
    wav_extended = np.array(wav_extended, dtype="float64")
    flux_extended = np.array(flux_extended, dtype="float64")
    
    print("Starting the Parallel Resolution convolution...")
    
    flux_conv_res = []
    counter = 0    
    # A lambda doesn't work in parallel - it cannot be pickled
    #lambda_funct = lambda x: convolve(x,R,wav_extended, flux_extended,FWHM_lim)
    #parallel_result = Parallel(n_jobs=-1)(delayed(lambda_funct)(wav) for wav in wav_chip)
    
    #for wav in wav_chip:
    #    a = convolve(wav,R,wav_extended, flux_extended,FWHM_lim)
    #    a = lambda_funct(wav)
    #    flux_conv_res.append(a)
    #    if(len(flux_conv_res)%(len(wav_chip)//100 ) == 0):
    #        counter = counter+1
    #        print("Resolution Convolution at {}%%...".format(counter))
    #flux_conv_res = np.array(flux_conv_res, dtype="float64")
    
    
    # select all values such that they are within the FWHM limits
    #   FWHM = wav/R
    #   indexes = [ i for i in range(len(wav_extended)) if ((wav - FWHM_lim*FWHM) < wav_extended[i] < (wav + FWHM_lim*FWHM))]
    #   flux_2convolve = flux_extended[indexes[0]:indexes[-1]+1]
    #   IP = unitary_Gauss(wav_extended[indexes[0]:indexes[-1]+1], wav, FWHM)
    #   flux_conv_res.append(np.sum(IP*flux_2convolve))
    if parallel_workers:
        # If given existing workers to use
        parallel_result = parallel_workers(delayed(convolve)(wav, R, wav_extended, flux_extended, FWHM_lim) for wav in wav_chip)
    else:
        parallel_result = Parallel(n_jobs=n_jobs)(delayed(convolve)(wav,R,wav_extended, flux_extended,FWHM_lim) for wav in wav_chip)
    flux_conv_res = np.array(parallel_result, dtype="float64")
    print("Done.\n")
    
    
    return wav_chip, flux_conv_res 
print("function done")
function done
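The commented-out lambda version above fails because joblib's default multiprocessing backend pickles the callable, and lambdas cannot be pickled while module-level named functions can. A quick stdlib demonstration:

```python
import pickle
from math import sqrt

pickle.dumps(sqrt)  # named functions pickle by reference

try:
    pickle.dumps(lambda x: sqrt(x))
    lambda_picklable = True
except Exception:
    lambda_picklable = False  # pickling a lambda always raises
```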

Try parallel processing for the convolution

from math import sqrt
from joblib import Parallel, delayed
Parallel(n_jobs=2)(delayed(sqrt)(i ** 2) for i in range(10))

In [45]:
import time
import datetime
start = time.time()
print("start time", datetime.datetime.now().time())

parallel_x, parallel_y = parallel_convolution(tapas_h20_data[0], tapas_h20_data[1], "1", 50000, FWHM_lim=5.0, n_jobs=-1)
  
done = time.time()
print("end time", datetime.datetime.now().time())
elapsed = done - start
print("Convolution time = ", elapsed)


### Need to try running this code as a script, not in the notebook, to see if it works and is faster.
# Will be beneficial when trying to find the best scaling factor.

# Maybe a good idea to find a general rule of thumb for the line depth that needs to be reached.
start time 16:27:23.310289
Starting the Parallel Resolution convolution...
Done.

end time 16:28:29.606442
Convolution time =  66.2734110355
In [46]:
# Test passing in the parallel worker
start = time.time()
with Parallel(n_jobs=-1, verbose=1) as parallel:
    par_x, par_y = parallel_convolution(tapas_h20_data[0], tapas_h20_data[1], "1", 50000, FWHM_lim=5.0, n_jobs=-1, parallel_workers=parallel)
  
done = time.time()
print("Convolution time = ", done - start)
Starting the Parallel Resolution convolution...
[Parallel(n_jobs=-1)]: Done 221 tasks      | elapsed:    2.1s
[Parallel(n_jobs=-1)]: Done 3221 tasks      | elapsed:   10.7s
[Parallel(n_jobs=-1)]: Done 8221 tasks      | elapsed:   23.9s
[Parallel(n_jobs=-1)]: Done 15221 tasks      | elapsed:   40.4s
Done.

Convolution time =  58.8783938885
[Parallel(n_jobs=-1)]: Done 23268 out of 23268 | elapsed:   57.0s finished
In [12]:
# Saving a result for comparison

#np.savetxt("Convolved_50000_tapas_wavelength_allchips.txt", parallel_x)
#np.savetxt("Convolved_50000_tapas_transmitance_allchips.txt", parallel_y)

#np.savetxt("Convolved_50000_tapas_wavelength_allchips_dividebynumber.txt", parallel_x)
#np.savetxt("Convolved_50000_tapas_transmitance_allchips_dividebynumber.txt", parallel_y)

Testing Parallel processing convolution times.

Windows laptop

Convolution time = 868.3053071498871 # parallel code 1 process

Convolution time = 981.6766209602356 # parallel 2 jobs, backend="threading"

Convolution time = 899.5289189815521 # parallel -1 jobs, backend="threading"

Convolution time = 2408.0208117961884 # parallel n_jobs=4, backend="threading" ~40min

Convolution time = 983.7938089370728 # n_jobs=1, backend="threading" ~16min

Linux Work comp

Convolution time = 54.9865338802 # n_jobs=-1

Convolution time = 184.560889959 # n_jobs=1 ~ 3 min

Convolution time = 99.8279280663 # n_jobs=2 ~ 1.5 min

Convolution time = 68.0848469734 # n_jobs=3 ~ 1 min

Convolution time = 56.3469331264 # n_jobs=4 < 1 min

Convolution time = 253.075296164 # Work comp # n_jobs=-1, backend="threading"

All chips at once - condition "0"

linux on parallel
Convolution time = 1150.128829s

My conclusion is that joblib does a great job and increases the convolution speed for this task on Linux. Threading is not good in this instance.

In [47]:
plt.plot(tapas_h20_data[0], tapas_h20_data[1], "b")
#plt.plot(x,y/np.max(y), "r")
plt.plot(parallel_x, parallel_y, "-r")
plt.xlabel("Wavelength (nm)")
plt.ylabel("Flux")
plt.title("Test of convolutions")
#plt.show()
# Make it interactive with Bokeh
bokeh.plotting.show(bokeh.mpl.to_bokeh())
Out[47]:

<Bokeh Notebook handle for In[44]>

Fit best scaling power.

Does each chip need a different scaling power?

In [48]:
from lmfit import minimize, Parameters
import lmfit
In [49]:
from scipy.interpolate import interp1d
def match_wl(wl, spec, ref_wl, method="scipy", kind="linear"):
    """Interpolate Wavelengths of spectra to common WL
    Most likely convert telluric to observed spectra wl after wl mapping performed"""
    starttime = time.time()
    if method == "scipy":
        print(kind + " scipy interpolation")
        linear_interp = interp1d(wl, spec, kind=kind)
        new_spec = linear_interp(ref_wl)
    elif method == "numpy":
        if kind.lower() != "linear":
            print("Warning: Cannot do " + kind + " interpolation with numpy, switching to linear")
        print("Linear numpy interpolation")
        new_spec = np.interp(ref_wl, wl, spec)  # 1-d piecewise linear interpolation
    else:
        print("Method was given as " + method)
        raise ValueError("Incorrect interpolation method specified")
    print("Interpolation Time = " + str(time.time() - starttime) + " seconds")

    return new_spec

def slice_spectra(wl, spectrum, low, high):
    """Extract a section of a spectrum between wavelength bounds."""
    #print("lower bound", low)
    #print("upper bound", high)
    mask = (wl > low) & (wl < high)
    wl_sec = wl[mask]
    spectrum_sec = spectrum[mask]
    return wl_sec, spectrum_sec
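`slice_spectra` relies on NumPy boolean masks (the product of two boolean arrays is an elementwise AND, equivalently written with `&`); a small self-contained check of the same pattern:

```python
import numpy as np

wl = np.linspace(2110.0, 2170.0, 7)      # 2110, 2120, ..., 2170
flux = np.arange(7, dtype=float)

mask = (wl > 2120.0) & (wl < 2160.0)     # equivalent to map1 * map2
wl_sec, flux_sec = wl[mask], flux[mask]  # keeps 2130, 2140, 2150
```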
In [52]:
### Fit using lmfit

def h20_residual(params, obs_data, telluric_data):
    # Parameters 
    ScaleFactor = params["ScaleFactor"].value
    R = params["R"].value
    FWHM_lim = params["FWHM_lim"].value
    n_jobs = params["n_jobs"].value
    chip_select = params["chip_select"].value
    parallel_workers = params["parallel_workers"].value  # Joblib parallel worker
    
    # Data
    obs_wl = obs_data[0]
    obs_I = obs_data[1]
    telluric_wl = telluric_data[0]
    telluric_I = telluric_data[1]
    
    # Telluric scaling T ** x
    scaled_telluric_I = telluric_I ** ScaleFactor
    
    # Convolution to the instrument resolution
    convolved_wl, convolved_I = parallel_convolution(telluric_wl, scaled_telluric_I, str(chip_select), R, FWHM_lim=FWHM_lim, n_jobs=n_jobs, parallel_workers=parallel_workers)
    # Interpolate the convolved telluric spectrum onto the observed wavelength grid
    interped_telluric = match_wl(convolved_wl, convolved_I, obs_wl)
    print("Convolution inside residual function was done")

    return 1 - (obs_I / interped_telluric)
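The fit itself can be demonstrated on synthetic data. This sketch substitutes `scipy.optimize.minimize_scalar` for `lmfit.minimize` and omits the convolution step for brevity; the "observation" is a telluric line raised to a known power, which the fit recovers:

```python
import numpy as np
from scipy.optimize import minimize_scalar

wav = np.linspace(2110.0, 2112.0, 500)
telluric = 1.0 - 0.4 * np.exp(-0.5 * ((wav - 2111.0) / 0.05) ** 2)
true_x = 1.3
obs = telluric ** true_x  # synthetic observation

def cost(x):
    # Sum of squares of the residual 1 - obs/model, as in h20_residual.
    residual = 1.0 - obs / (telluric ** x)
    return np.sum(residual ** 2)

result = minimize_scalar(cost, bounds=(0.5, 2.0), method="bounded")
# result.x recovers the scaling power
```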
In [53]:
# Set up parameters 
params = Parameters()
params.add('ScaleFactor', value=1)
params.add('R', value=50000, vary=False)
params.add('FWHM_lim', value=5, vary=False)
params.add('n_jobs', value=-1, vary=False)
params.add('chip_select', value=2, vary=False)
In [62]:
#wl2, I2_uncorr
# wl2, I2_not_h20_corr

# Sliced to wavelength measurement of detector
tell_data1 = slice_spectra(tapas_h20_data[0], tapas_h20_data[1], 0.95*min(wl1), 1.05*max(wl1))
tell_data2 = slice_spectra(tapas_h20_data[0], tapas_h20_data[1], 0.95*min(wl2), 1.05*max(wl2))
tell_data3 = slice_spectra(tapas_h20_data[0], tapas_h20_data[1], 0.95*min(wl3), 1.05*max(wl3))
tell_data4 = slice_spectra(tapas_h20_data[0], tapas_h20_data[1], 0.95*min(wl4), 1.05*max(wl4))

               
print("Number of values to iterate", len(tell_data2[0]))
Number of values to iterate 114081
In [ ]:
# test outside of fit
print("Starting test")
with Parallel(n_jobs=-1, verbose=1) as parallel:
    params.add('parallel_workers', value=parallel, vary=False)
    residual = h20_residual(params,[wl2, I2_not_h20_corr], tell_data2)
Starting test
Starting the Parallel Resolution convolution...
[Parallel(n_jobs=-1)]: Done 440 tasks      | elapsed:    1.8s
[Parallel(n_jobs=-1)]: Done 2840 tasks      | elapsed:   10.3s
In [35]:
# Perform minimization
with Parallel(n_jobs=-1) as parallel:
    params.add('parallel_workers', value=parallel, vary=False)
    out = minimize(h20_residual, params, args=([wl2, I2_not_h20_corr], tell_data2))
outreport = lmfit.fit_report(out)
print(outreport)
CPU times: user 2 µs, sys: 1 µs, total: 3 µs
Wall time: 7.15 µs
Starting the Parallel Resolution convolution...
---------------------------------------------------------------------------
JoblibValueError                          Traceback (most recent call last)
<ipython-input-35-5d6914e803d1> in <module>()
      1 get_ipython().magic(u'time')
      2 # Peform minimization
----> 3 out = minimize(h20_residual, params, args=([wl2, I2_not_h20_corr], tell_data2))
      4 outreport = lmfit.fit_report(out)
      5 print(outreport)

/home/jneal/anaconda2/lib/python2.7/site-packages/lmfit/minimizer.pyc in minimize(fcn, params, method, args, kws, scale_covar, iter_cb, **fit_kws)
    772     fitter = Minimizer(fcn, params, fcn_args=args, fcn_kws=kws,
    773                        iter_cb=iter_cb, scale_covar=scale_covar, **fit_kws)
--> 774     return fitter.minimize(method=method)

/home/jneal/anaconda2/lib/python2.7/site-packages/lmfit/minimizer.pyc in minimize(self, method, params, **kws)
    704         elif user_method.startswith('lbfgsb'):
    705             function = self.lbfgsb
--> 706         return function(**kwargs)
    707 
    708 def minimize(fcn, params, method='leastsq', args=None, kws=None,

/home/jneal/anaconda2/lib/python2.7/site-packages/lmfit/minimizer.pyc in leastsq(self, params, **kws)
    545         np.seterr(all='ignore')
    546 
--> 547         lsout = scipy_leastsq(self.__residual, vars, **lskws)
    548         _best, _cov, infodict, errmsg, ier = lsout
    549         result.aborted = self._abort

/home/jneal/anaconda2/lib/python2.7/site-packages/scipy/optimize/minpack.pyc in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag)
    375     if not isinstance(args, tuple):
    376         args = (args,)
--> 377     shape, dtype = _check_func('leastsq', 'func', func, x0, args, n)
    378     m = shape[0]
    379     if n > m:

/home/jneal/anaconda2/lib/python2.7/site-packages/scipy/optimize/minpack.pyc in _check_func(checker, argname, thefunc, x0, args, numinputs, output_shape)
     24 def _check_func(checker, argname, thefunc, x0, args, numinputs,
     25                 output_shape=None):
---> 26     res = atleast_1d(thefunc(*((x0[:numinputs],) + args)))
     27     if (output_shape is not None) and (shape(res) != output_shape):
     28         if (output_shape[0] != 1):

/home/jneal/anaconda2/lib/python2.7/site-packages/lmfit/minimizer.pyc in __residual(self, fvars)
    256 
    257         params.update_constraints()
--> 258         out = self.userfcn(params, *self.userargs, **self.userkws)
    259         if callable(self.iter_cb):
    260             abort = self.iter_cb(params, self.result.nfev, out,

<ipython-input-31-8b777c1d9261> in h20_residual(params, obs_data, telluric_data)
     19 
     20     # Convolution
---> 21     convolved_telluric = parallel_convolution(telluric_wl, scaled_telluric_I, str(chip_select), R, FWHM_lim=FWHM_lim, n_jobs=n_jobs)
     22     interped_telluric = match_wl(telluric_wl, telluric_I,    obs_wl)
     23     print("Convolution inside residual function was done")

<ipython-input-10-59cd73d1d25b> in parallel_convolution(wav, flux, chip, R, FWHM_lim, n_jobs)
     53     #   flux_conv_res.append(np.sum(IP*flux_2convolve))
     54 
---> 55     parallel_result = Parallel(n_jobs=n_jobs)(delayed(convolve)(wav,R,wav_extended, flux_extended,FWHM_lim) for wav in wav_chip)
     56     flux_conv_res = np.array(parallel_result, dtype="float64")
     57     print("Done.\n")

/home/jneal/anaconda2/lib/python2.7/site-packages/joblib/parallel.pyc in __call__(self, iterable)
    808                 # consumption.
    809                 self._iterating = False
--> 810             self.retrieve()
    811             # Make sure that we get a last message telling us we are done
    812             elapsed_time = time.time() - self._start_time

/home/jneal/anaconda2/lib/python2.7/site-packages/joblib/parallel.pyc in retrieve(self)
    755                     # a working pool as they expect.
    756                     self._initialize_pool()
--> 757                 raise exception
    758 
    759     def __call__(self, iterable):

JoblibValueError: JoblibValueError
___________________________________________________________________________
Multiprocessing exception:
...........................................................................
(Verbose kernel-dispatch frames elided: runpy, ipykernel, traitlets, tornado,
zmq and IPython internals, together with their dumped locals. The user-code
frame and the lmfit/scipy frames follow.)
...........................................................................
/home/jneal/Phd/Codes/Phd-codes/Notebooks/H20_scaling/<ipython-input-35-5d6914e803d1> in <module>()
      1 
      2 
----> 3 
      4 get_ipython().magic(u'time')
      5 # Peform minimization
      6 out = minimize(h20_residual, params, args=([wl2, I2_not_h20_corr], tell_data2))
      7 outreport = lmfit.fit_report(out)
      8 print(outreport)
      9 
     10 

...........................................................................
/home/jneal/anaconda2/lib/python2.7/site-packages/lmfit/minimizer.py in minimize(fcn=<function h20_residual>, params=Parameters([(u'ScaleFactor', <Parameter 'ScaleFa..._select', value=2 (fixed), bounds=[None:None]>)]), method='leastsq', args=([array([ 2126.26660156,  2126.27807617,  2126.289...   2137.55395508,  2137.56469727], dtype=float32), array([ 0.99086277,  0.9900286 ,  0.98910876, ...,  1.00728387,
        1.00894202,  1.01048862])], (array([ 2126.26673956,  2126.26724998,  2126.267...56322487,
        2137.56374072,  2137.56425657]), array([ 0.99754614,  0.99753146,  0.99751645, ...,  0.99923135,
        0.9992343 ,  0.99923725]))), kws=None, scale_covar=True, iter_cb=None, **fit_kws={})
    769     data array, dependent variable, uncertainties in the data, and other
    770     data structures for the model calculation.
    771     """
    772     fitter = Minimizer(fcn, params, fcn_args=args, fcn_kws=kws,
    773                        iter_cb=iter_cb, scale_covar=scale_covar, **fit_kws)
--> 774     return fitter.minimize(method=method)
        fitter.minimize = <bound method Minimizer.minimize of <lmfit.minimizer.Minimizer object>>
        method = 'leastsq'
    775 
    776 
    777 
    778 

...........................................................................
/home/jneal/anaconda2/lib/python2.7/site-packages/lmfit/minimizer.py in minimize(self=<lmfit.minimizer.Minimizer object>, method='leastsq', params=None, **kws={})
    701         elif (user_method.startswith('nelder') or
    702               user_method.startswith('fmin')):
    703             function = self.fmin
    704         elif user_method.startswith('lbfgsb'):
    705             function = self.lbfgsb
--> 706         return function(**kwargs)
        function = <bound method Minimizer.leastsq of <lmfit.minimizer.Minimizer object>>
        kwargs = {'params': None}
    707 
    708 def minimize(fcn, params, method='leastsq', args=None, kws=None,
    709              scale_covar=True, iter_cb=None, **fit_kws):
    710     """

...........................................................................
/home/jneal/anaconda2/lib/python2.7/site-packages/lmfit/minimizer.py in leastsq(self=<lmfit.minimizer.Minimizer object>, params=None, **kws={})
    542 
    543         # suppress runtime warnings during fit and error analysis
    544         orig_warn_settings = np.geterr()
    545         np.seterr(all='ignore')
    546 
--> 547         lsout = scipy_leastsq(self.__residual, vars, **lskws)
        lsout = undefined
        self.__residual = undefined
        vars = [1]
        lskws = {'Dfun': None, 'col_deriv': False, 'ftol': 1e-07, 'full_output': 1, 'gtol': 1e-07, 'maxfev': 4000, 'xtol': 1e-07}
    548         _best, _cov, infodict, errmsg, ier = lsout
    549         result.aborted = self._abort
    550         self._abort = False
    551 

...........................................................................
/home/jneal/anaconda2/lib/python2.7/site-packages/scipy/optimize/minpack.py in leastsq(func=<bound method Minimizer.__residual of <lmfit.minimizer.Minimizer object>>, x0=array([1]), args=(), Dfun=None, full_output=1, col_deriv=False, ftol=1e-07, xtol=1e-07, gtol=1e-07, maxfev=4000, epsfcn=None, factor=100, diag=None)
    372     """
    373     x0 = asarray(x0).flatten()
    374     n = len(x0)
    375     if not isinstance(args, tuple):
    376         args = (args,)
--> 377     shape, dtype = _check_func('leastsq', 'func', func, x0, args, n)
        shape = undefined
        dtype = undefined
        func = <bound method Minimizer.__residual of <lmfit.minimizer.Minimizer object>>
        x0 = array([1])
        args = ()
        n = 1
    378     m = shape[0]
    379     if n > m:
    380         raise TypeError('Improper input: N=%s must not exceed M=%s' % (n, m))
    381     if epsfcn is None:

...........................................................................
/home/jneal/anaconda2/lib/python2.7/site-packages/scipy/optimize/minpack.py in _check_func(checker='leastsq', argname='func', thefunc=<bound method Minimizer.__residual of <lmfit.minimizer.Minimizer object>>, x0=array([1]), args=(), numinputs=1, output_shape=None)
     21 __all__ = ['fsolve', 'leastsq', 'fixed_point', 'curve_fit']
     22 
     23 
     24 def _check_func(checker, argname, thefunc, x0, args, numinputs,
     25                 output_shape=None):
---> 26     res = atleast_1d(thefunc(*((x0[:numinputs],) + args)))
        res = undefined
        thefunc = <bound method Minimizer.__residual of <lmfit.minimizer.Minimizer object>>
        x0 = array([1])
        numinputs = 1
        args = ()
     27     if (output_shape is not None) and (shape(res) != output_shape):
     28         if (output_shape[0] != 1):
     29             if len(output_shape) > 1:
     30                 if output_shape[1] == 1:

...........................................................................
/home/jneal/anaconda2/lib/python2.7/site-packages/lmfit/minimizer.py in __residual(self=<lmfit.minimizer.Minimizer object>, fvars=array([1]))
    253         for name, val in zip(self.result.var_names, fvars):
    254             params[name].value = params[name].from_internal(val)
    255         self.result.nfev = self.result.nfev + 1
    256 
    257         params.update_constraints()
--> 258         out = self.userfcn(params, *self.userargs, **self.userkws)
        out = undefined
        self.userfcn = <function h20_residual>
        params = Parameters([(u'ScaleFactor', <Parameter 'ScaleFa..._select', value=2 (fixed), bounds=[None:None]>)])
        self.userargs = ([array([ 2126.26660156,  2126.27807617,  2126.289...   2137.55395508,  2137.56469727], dtype=float32), array([ 0.99086277,  0.9900286 ,  0.98910876, ...,  1.00728387,
        1.00894202,  1.01048862])], (array([ 2126.26673956,  2126.26724998,  2126.267...56322487,
        2137.56374072,  2137.56425657]), array([ 0.99754614,  0.99753146,  0.99751645, ...,  0.99923135,
        0.9992343 ,  0.99923725])))
        self.userkws = {}
    259         if callable(self.iter_cb):
    260             abort = self.iter_cb(params, self.result.nfev, out,
    261                                  *self.userargs, **self.userkws)
    262             self._abort = self._abort or abort

...........................................................................
/home/jneal/Phd/Codes/Phd-codes/Notebooks/H20_scaling/<ipython-input-31-8b777c1d9261> in h20_residual(params=Parameters([(u'ScaleFactor', <Parameter 'ScaleFa..._select', value=2 (fixed), bounds=[None:None]>)]), obs_data=[array([ 2126.26660156,  2126.27807617,  2126.289...   2137.55395508,  2137.56469727], dtype=float32), array([ 0.99086277,  0.9900286 ,  0.98910876, ...,  1.00728387,
        1.00894202,  1.01048862])], telluric_data=(array([ 2126.26673956,  2126.26724998,  2126.267...56322487,
        2137.56374072,  2137.56425657]), array([ 0.99754614,  0.99753146,  0.99751645, ...,  0.99923135,
        0.9992343 ,  0.99923725])))
     16     
     17     # Telluric scaling T ** x
     18     scaled_telluric_I = telluric_data ** ScaleFactor
     19     
     20     # Convolution
---> 21     convolved_telluric = parallel_convolution(telluric_wl, scaled_telluric_I, str(chip_select), R, FWHM_lim=FWHM_lim, n_jobs=n_jobs)
     22     interped_telluric = match_wl(telluric_wl, telluric_I,    obs_wl)
     23     print("Convolution inside residual function was done")
     24     
     25     return 1 - (obs_I / convolved_telluric) 

...........................................................................
/home/jneal/Phd/Codes/Phd-codes/Notebooks/H20_scaling/<ipython-input-10-59cd73d1d25b> in parallel_convolution(wav=array([ 2126.26673956,  2126.26724998,  2126.267...56322487,
        2137.56374072,  2137.56425657]), flux=array([[  2.12626674e+03,   2.12626725e+03,   2....231349e-01,   9.99234303e-01,   9.99237250e-01]]), chip='2', R=50000, FWHM_lim=5, n_jobs=-1)
     50     #   indexes = [ i for i in range(len(wav_extended)) if ((wav - FWHM_lim*FWHM) < wav_extended[i] < (wav + FWHM_lim*FWHM))]
     51     #   flux_2convolve = flux_extended[indexes[0]:indexes[-1]+1]
     52     #   IP = unitary_Gauss(wav_extended[indexes[0]:indexes[-1]+1], wav, FWHM)
     53     #   flux_conv_res.append(np.sum(IP*flux_2convolve))
     54     
---> 55     parallel_result = Parallel(n_jobs=n_jobs)(delayed(convolve)(wav,R,wav_extended, flux_extended,FWHM_lim) for wav in wav_chip)
     56     flux_conv_res = np.array(parallel_result, dtype="float64")
     57     print("Done.\n")
     58     
     59     

...........................................................................
/home/jneal/anaconda2/lib/python2.7/site-packages/joblib/parallel.py in __call__(self=Parallel(n_jobs=-1), iterable=<generator object <genexpr>>)
    805             if pre_dispatch == "all" or n_jobs == 1:
    806                 # The iterable was consumed all at once by the above for loop.
    807                 # No need to wait for async callbacks to trigger to
    808                 # consumption.
    809                 self._iterating = False
--> 810             self.retrieve()
        self.retrieve = <bound method Parallel.retrieve of Parallel(n_jobs=-1)>
    811             # Make sure that we get a last message telling us we are done
    812             elapsed_time = time.time() - self._start_time
    813             self._print('Done %3i out of %3i | elapsed: %s finished',
    814                         (len(self._output), len(self._output),

---------------------------------------------------------------------------
Sub-process traceback:
---------------------------------------------------------------------------
ValueError                                         Wed May 25 17:19:34 2016
PID: 8269                   Python 2.7.11: /home/jneal/anaconda2/bin/python
...........................................................................
/home/jneal/anaconda2/lib/python2.7/site-packages/joblib/parallel.py in __call__(self=<joblib.parallel.BatchedCalls object>)
     67     def __init__(self, iterator_slice):
     68         self.items = list(iterator_slice)
     69         self._size = len(self.items)
     70 
     71     def __call__(self):
---> 72         return [func(*args, **kwargs) for func, args, kwargs in self.items]
        func = <function convolve>
        args = (2126.8293657302665, 50000, array([ 2126.61694192,  2126.61745251,  2126.617...56322487,
        2137.56374072,  2137.56425657]), array([], dtype=float64), 5)
        kwargs = {}
        self.items = [(<function convolve>, (2126.8293657302665, 50000, array([ 2126.61694192,  2126.61745251,  2126.617...56322487,
        2137.56374072,  2137.56425657]), array([], dtype=float64), 5), {})]
     73 
     74     def __len__(self):
     75         return self._size
     76 

...........................................................................
/home/jneal/Phd/Codes/Phd-codes/Notebooks/H20_scaling/<ipython-input-10-59cd73d1d25b> in convolve(wav=2126.8293657302665, R=50000, wav_extended=array([ 2126.61694192,  2126.61745251,  2126.617...56322487,
        2137.56374072,  2137.56425657]), flux_extended=array([], dtype=float64), FWHM_lim=5)
      5         # select all values such that they are within the FWHM limits
      6         FWHM = wav/R
      7         indexes = [ i for i in range(len(wav_extended)) if ((wav - FWHM_lim*FWHM) < wav_extended[i] < (wav + FWHM_lim*FWHM))]
      8         flux_2convolve = flux_extended[indexes[0]:indexes[-1]+1]
      9         IP = unitary_Gauss(wav_extended[indexes[0]:indexes[-1]+1], wav, FWHM)
---> 10         val = np.sum(IP*flux_2convolve) 
     11         unitary_val = np.sum(IP*np.ones_like(flux_2convolve))  # Effect of convolution onUnitary. For changing number of points
     12         return val/unitary_val
     13     
     14 def parallel_convolution(wav, flux, chip, R, FWHM_lim=5.0, n_jobs=-1):

ValueError: operands could not be broadcast together with shapes (833,) (0,) 
___________________________________________________________________________
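One plausible reading of the traceback: `telluric_data` is a (wavelength, transmission) pair, and `telluric_data ** ScaleFactor` in `h20_residual` exponentiated the whole stacked 2-D array, wavelengths included (note `flux=array([[ 2.12626674e+03, ...]])` in the `parallel_convolution` frame). The wavelength mask inside `convolve` then matched zero flux points, giving the shapes-`(833,)`-vs-`(0,)` broadcast error. A minimal sketch of the fix, assuming that diagnosis is right (names mirror the notebook but the reconstruction is a guess):

```python
import numpy as np

def scale_telluric(telluric_data, scale_factor):
    """Apply T**x to the transmission column only, not the wavelengths."""
    wl, transmission = telluric_data        # unpack before scaling
    return wl, transmission ** scale_factor

# Toy (wavelength, transmission) pair standing in for the TAPAS data
telluric_data = (np.linspace(2126.0, 2138.0, 100),
                 np.linspace(0.99, 1.0, 100))
wl, scaled = scale_telluric(telluric_data, 2.0)
```

Only `scaled` should then be passed to `parallel_convolution` as the flux, with `wl` as the wavelength axis.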

Timing test of code

In [39]:
## Time difference between my slice_spectra and Pedro's wav_selector
%timeit wav_selector(tapas_h20_data[0], tapas_h20_data[1], min(wl1), max(wl4))
The slowest run took 4.89 times longer than the fastest. This could mean that an intermediate result is being cached.
1 loop, best of 3: 303 ms per loop
In [40]:
%timeit slice_spectra(tapas_h20_data[0], tapas_h20_data[1], min(wl1), max(wl4))
The slowest run took 469.44 times longer than the fastest. This could mean that an intermediate result is being cached.
1 loop, best of 3: 438 µs per loop

The timings above actually show the opposite of what I expected: slice_spectra (438 µs) is roughly 700× faster than Pedro's wav_selector (303 ms), so the faster selection approach is the one to keep. I should probably move this timing comparison to a different notebook.
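Neither implementation is shown in this notebook, so the following is only a guess at where the speed gap comes from: a single vectorized boolean-mask comparison versus a per-element pure-Python loop. Both hypothetical versions below return the same result; only the fast one should survive in the pipeline.

```python
import numpy as np

def select_fast(wav, flux, wav_min, wav_max):
    """Vectorized wavelength selection with a NumPy boolean mask."""
    wav = np.asarray(wav)
    flux = np.asarray(flux)
    mask = (wav > wav_min) & (wav < wav_max)
    return wav[mask], flux[mask]

def select_slow(wav, flux, wav_min, wav_max):
    """Per-element selection in pure Python: same answer, far slower."""
    pairs = [(w, f) for w, f in zip(wav, flux) if wav_min < w < wav_max]
    return (np.array([p[0] for p in pairs]),
            np.array([p[1] for p in pairs]))

wav = np.linspace(2100.0, 2200.0, 10000)
flux = np.ones_like(wav)
w_fast, f_fast = select_fast(wav, flux, 2120.0, 2140.0)
w_slow, f_slow = select_slow(wav, flux, 2120.0, 2140.0)
```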

Apply correction with best scaling power:

And plot the result.
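A minimal sketch of that final step, with hypothetical names: `best_x` is the fitted exponent from the residual minimisation. In the real pipeline the scaled transmission must also be convolved to the CRIRES resolution (`parallel_convolution`) and interpolated onto the observed wavelengths (`match_wl`) before dividing; both steps are skipped here for clarity.

```python
import numpy as np

def apply_h2o_correction(obs_I, telluric_T, best_x):
    """Divide the observation by the telluric transmission scaled as T**x."""
    return obs_I / telluric_T ** best_x

# Synthetic check: if the observation is exactly T**x, the corrected
# spectrum should come out flat at 1 (the "flattest result" criterion).
T = np.linspace(0.8, 1.0, 50)
obs = T ** 1.3
corrected = apply_h2o_correction(obs, T, 1.3)
```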

In [ ]: